    Belle II Pixel Detector Commissioning and Operational Experience

    Status of the BELLE II Pixel Detector

    The Belle II experiment at the super-KEK B-factory (SuperKEKB) in Tsukuba, Japan, has been collecting e⁺e⁻ collision data since March 2019. Operating at a record-breaking luminosity of up to 4.7×10³⁴ cm⁻²s⁻¹, data corresponding to 424 fb⁻¹ has since been recorded. The Belle II VerteX Detector (VXD) is central to the Belle II detector and its physics program and plays a crucial role in reconstructing precise primary and decay vertices. It consists of the outer four-layer Silicon Vertex Detector (SVD), using double-sided silicon strips, and the inner two-layer PiXel Detector (PXD), based on the Depleted P-channel Field Effect Transistor (DePFET) technology. The PXD DePFET structure combines signal generation and amplification within pixels with a minimum pitch of (50×55) μm². A high gain and a high signal-to-noise ratio allow thinning the sensors to 75 μm while retaining a high pixel hit efficiency of about 99%. As a consequence, the material budget of the full detector is also kept low, at ≈0.21% X/X₀ per layer in the acceptance region. This includes contributions from the control, Analog-to-Digital Converter (ADC), and data-processing Application Specific Integrated Circuits (ASICs), as well as from cooling and support structures. This article presents the experience gained from four years of operating the PXD, the first full-scale detector employing the DePFET technology in high-energy physics. Overall, the PXD has met expectations. Operating in the intense SuperKEKB environment poses many challenges, which are also discussed. The current PXD system remains incomplete, with only 20 out of 40 modules installed. A full replacement has been constructed and is currently in its final testing stage before it is installed into Belle II during the ongoing long shutdown, which will last throughout 2023.

    The Developing Human Connectome Project Neonatal Data Release

    The Developing Human Connectome Project (dHCP) has created a large open-science resource which provides researchers with data for investigating typical and atypical brain development across the perinatal period. It has collected 1228 multimodal magnetic resonance imaging (MRI) brain datasets from 1173 fetal and/or neonatal participants, together with collateral demographic, clinical, family, neurocognitive and genomic data. All subjects were studied in utero and/or soon after birth on a single MRI scanner using specially developed scanning sequences, which included novel motion-tolerant imaging methods. Imaging data are complemented by rich demographic, clinical, neurodevelopmental, and genomic information. The project is now releasing a large set of neonatal data; fetal data will be described and released separately. This release includes scans from 783 infants, of whom 583 were healthy infants born at term; the remainder were preterm infants and infants at high risk of atypical neurocognitive development. Many infants were imaged more than once to provide longitudinal data, and the total number of datasets being released is 887. We describe the dHCP image acquisition and processing protocols, summarize the available imaging and collateral data, and provide information on how the data can be accessed.

    Classification and feature analysis of the Human Connectome Project dataset for differentiating between males and females

    We analysed features relevant for differentiating between males and females based on the data available from the Human Connectome Project (HCP) S1200 dataset. We used 354 features comprising cognitive and emotional measures as well as measures derived from task functional magnetic resonance imaging (MRI) and structural brain MRI. The paper presents a thorough analysis of this extensive set of features using a machine learning approach, with the goal of identifying features that can differentiate between males and females. We used two state-of-the-art classification algorithms with different properties: a support vector machine (SVM) and a random forest classifier (RFC). For each classifier, the hyperparameters were tuned and the classifiers optimized using nested cross-validation and grid search. This resulted in classification accuracies of 91% and 89% for the SVM and the RFC, respectively. Using the SHAP (SHapley Additive exPlanations) method we obtained the relevance of features as indicators of sex differences and identified features with high discriminative power for sex classification. The majority of the top features were brain morphological measures, and only a small proportion were related to cognitive performance. Our results demonstrate the importance and advantages of using a machine learning approach when analysing sex differences.
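
    A minimal Python sketch of the general workflow described in this abstract, assuming scikit-learn and the shap package; the synthetic data, hyperparameter grids and fold counts below are illustrative placeholders rather than the paper's actual settings.

        # Nested cross-validation with grid search for an SVM and a random forest,
        # followed by SHAP-based feature relevance (illustrative sketch only).
        import numpy as np
        import shap
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Stand-in for the 354 cognitive/MRI-derived features and the male/female labels.
        X, y = make_classification(n_samples=600, n_features=354, n_informative=40, random_state=0)

        inner_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # hyperparameter tuning
        outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)   # unbiased accuracy estimate

        models = {
            "SVM": GridSearchCV(
                make_pipeline(StandardScaler(), SVC(kernel="rbf")),
                param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
                cv=inner_cv),
            "RFC": GridSearchCV(
                RandomForestClassifier(random_state=0),
                param_grid={"n_estimators": [200, 500], "max_depth": [None, 10]},
                cv=inner_cv),
        }

        for name, search in models.items():
            scores = cross_val_score(search, X, y, cv=outer_cv)
            print(f"{name}: accuracy {scores.mean():.2f} +/- {scores.std():.2f}")

        # Feature relevance with SHAP, shown here for the random forest via TreeExplainer.
        rfc = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        sv = shap.TreeExplainer(rfc).shap_values(X[:100])        # explain a subset for speed
        sv = sv[1] if isinstance(sv, list) else sv[..., 1]       # shap returns per-class values; keep class 1
        top10 = np.argsort(np.abs(sv).mean(axis=0))[::-1][:10]   # rank features by mean |SHAP value|
        print("Most discriminative feature indices:", top10)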

    Normative models for neuroimaging markers: impact of model selection, sample size and evaluation criteria

    Modelling population reference curves, or normative modelling, is increasingly used with the advent of large neuroimaging studies. In this paper we assess the performance of fitting methods from the perspective of clinical applications and investigate the influence of sample size. Further, we evaluate linear and non-linear models for percentile curve estimation and highlight how the bias-variance trade-off manifests in typical neuroimaging data. As an example application, we created plausible ground-truth distributions of hippocampal volumes in the age range of 45 to 80 years. Based on these distributions we repeatedly simulated samples of sizes between 50 and 50,000 data points, and for each simulated sample we fitted a range of normative models. We compared the fitted models and their variability across repetitions to the ground truth, with a specific focus on the outer percentiles (1st, 5th, 10th), as these are the most clinically relevant. Our results quantify the expected decreasing trend in the variance of the volume estimates with increasing sample size. However, bias in the volume estimates decreases only modestly, without much improvement at large sample sizes. The uncertainty of model performance is substantial for what would often be considered large samples in a neuroimaging context and rises dramatically at the ends of the age range, where fewer data points exist. Flexible models perform better across sample sizes, especially for non-linear ground truth. Surprisingly large samples of several thousand data points are needed to accurately capture the outlying percentiles across the age range for applications in research and clinical settings. Performance evaluation methods should assess both bias and variance. Furthermore, caution is needed when attempting to go near the ends of the age range captured by the source dataset and, as is a well-known general principle, extrapolation beyond the age range should always be avoided. To help with such evaluations of normative models, we have made our code available to guide researchers developing or utilising normative models.
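
    The paper's released code is the authoritative reference; the short Python sketch below merely illustrates the simulation logic described in the abstract, using an assumed ground-truth age/volume relationship and scikit-learn quantile models as stand-ins for the normative models that were actually evaluated.

        # Simulate samples from an assumed ground truth, fit linear and spline-based
        # quantile (percentile) models, and compare bias and variance of the estimated
        # 5th percentile across sample sizes (illustrative sketch only).
        import numpy as np
        from scipy.stats import norm
        from sklearn.linear_model import QuantileRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import SplineTransformer

        rng = np.random.default_rng(0)
        SD = 350.0  # assumed age-independent spread of hippocampal volume (mm^3)

        def true_mean(age):
            # Assumed non-linear decline of hippocampal volume with age (placeholder values).
            return 4200.0 - 8.0 * (age - 45) - 0.25 * (age - 45) ** 2

        def simulate(n):
            age = rng.uniform(45, 80, size=n)
            vol = true_mean(age) + SD * rng.standard_normal(n)
            return age.reshape(-1, 1), vol

        q = 0.05  # one of the clinically relevant outer percentiles
        models = {
            "linear": QuantileRegressor(quantile=q, alpha=0.0),
            "spline": make_pipeline(SplineTransformer(degree=3, n_knots=5),
                                    QuantileRegressor(quantile=q, alpha=0.0)),
        }

        grid = np.linspace(45, 80, 36).reshape(-1, 1)
        truth = true_mean(grid.ravel()) + SD * norm.ppf(q)   # true 5th-percentile curve

        for n in (50, 500, 5000):
            for name, model in models.items():
                # Refit on repeated simulated samples to separate bias from variance.
                preds = np.array([model.fit(*simulate(n)).predict(grid) for _ in range(20)])
                bias = np.mean(preds.mean(axis=0) - truth)
                variance = preds.var(axis=0).mean()
                print(f"n={n:5d}  {name:6s}  bias={bias:7.1f} mm^3  variance={variance:9.1f}")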

    Comparison of Lesion Size Using Area and Volume in Full Field Digital Mammograms

    The size of a lesion is a feature often used in computer-aided detection systems for classification between benign and malignant lesions. However, the size of a lesion represented by its area might not be as reliable as its volume. Volume is more independent of the view (CC or MLO) since it represents three-dimensional information, whereas area refers only to the projection of a lesion onto a two-dimensional plane. Furthermore, volume might be better than area for comparing lesion size in two consecutive exams and for evaluating temporal change to distinguish benign and malignant lesions. We used volumetric breast density estimation in digital mammograms to obtain the thickness of dense tissue in regions of interest in order to compute the volume of lesions. The dataset consisted of 382 mammogram pairs in CC and MLO views and 120 mammogram pairs for temporal analysis. The obtained correlation coefficients between lesion size in the CC and MLO views were 0.70 (0.64-0.76) and 0.83 (0.79-0.86) for area and volume, respectively. A two-tailed z-test showed a significant difference between the two correlation coefficients (p=0.0001). The use of area and volume in the temporal analysis of mammograms was evaluated using ROC analysis. The obtained values of the area under the curve (AUC) were 0.73 and 0.75 for area and volume, respectively. Although a higher AUC value was found for volume, this difference was not significant (p=0.16).
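
    For illustration, a comparison of two correlation coefficients can be carried out with a Fisher r-to-z transform and a two-tailed z-test, as in the short Python sketch below. The independent-samples formula shown here is a simplifying assumption; the coefficients in the study come from the same set of lesions, so the paper's exact test may differ and the printed p-value will not match the reported one precisely.

        # Two-tailed z-test for the difference between two correlation coefficients
        # using Fisher's r-to-z transform (independent-samples approximation).
        import numpy as np
        from scipy.stats import norm

        def compare_correlations(r1, n1, r2, n2):
            z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher r-to-z transform
            se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of the difference
            z = (z1 - z2) / se
            return z, 2.0 * norm.sf(abs(z))                # two-tailed p-value

        # Values reported in the abstract: r = 0.70 (area) vs r = 0.83 (volume), 382 CC/MLO pairs.
        z, p = compare_correlations(0.70, 382, 0.83, 382)
        print(f"z = {z:.2f}, two-tailed p = {p:.2g}")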

    Sample sizes and population differences in brain template construction

    Spatial normalization, or deformation to a standard brain template, is routinely used as a key module in various pipelines for the processing of magnetic resonance imaging (MRI) data. Brain templates are often constructed using MRI data from a limited number of subjects. Individual brains show significant variability in their morphology; thus, sample size and population differences are two key factors that influence brain template construction. To address these influences, we employed two independent groups from the Human Connectome Project (HCP) and the Chinese Human Connectome Project (CHCP) to quantify the impacts of sample size and population on brain template construction. We first assessed the effect of sample size on the construction of volumetric brain templates using data subsets from the HCP and CHCP datasets. We applied a voxel-wise index of the deformation variability and a logarithmically transformed Jacobian determinant to quantify the variability associated with template construction, and modeled the brain template variability as a power function of the sample size. At the system level, the frontoparietal control network and dorsal attention network demonstrated higher deformation variability and logged Jacobian determinants, whereas other primary networks showed lower variability. To investigate population differences, we constructed Caucasian and Chinese standard brain atlases (namely, US200 and CN200). The two demographically matched templates exhibited dramatic differences in their deformation variability and logged Jacobian determinants, particularly in the language-related areas, bilaterally in the supramarginal and inferior frontal gyri. Using independent data from the HCP and CHCP, we examined segmentation and registration accuracy and observed a significant reduction in the performance of brain segmentation and registration when population-mismatched templates were used in spatial normalization. Our findings provide evidence to support the use of population-matched templates in human brain mapping studies. The US200 and CN200 templates have been released on the Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC) website (https://www.nitrc.org/projects/us200_cn200/).
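
    As a small illustration of the power-function model mentioned above, the Python sketch below fits variability ~ a * n^(-b) + c to hypothetical template-variability measurements; the numbers are placeholders, not values from the study.

        # Fit a power function of sample size to (hypothetical) template variability values,
        # e.g. summaries of voxel-wise deformation variability or log-Jacobian determinants.
        import numpy as np
        from scipy.optimize import curve_fit

        def power_law(n, a, b, c):
            # variability(n) = a * n**(-b) + c : decays with sample size towards a floor c
            return a * np.power(n, -b) + c

        sample_sizes = np.array([10, 20, 40, 80, 160, 320], dtype=float)
        variability = np.array([1.90, 1.35, 1.02, 0.81, 0.70, 0.63])  # placeholder measurements

        (a, b, c), _ = curve_fit(power_law, sample_sizes, variability, p0=(5.0, 0.5, 0.5))
        print(f"fitted model: variability ~= {a:.2f} * n^(-{b:.2f}) + {c:.2f}")
        print(f"extrapolated variability at n=1000: {power_law(1000.0, a, b, c):.2f}")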

    Structural and functional asymmetry of the neonatal cerebral cortex

    Features of brain asymmetry have been implicated in a broad range of cognitive processes; however, their origins are still poorly understood. Here we investigated cortical asymmetries in 442 healthy term-born neonates using structural and functional magnetic resonance images from the Developing Human Connectome Project. Our results demonstrate that the neonatal cortex is markedly asymmetric in both structure and function. Cortical asymmetries observed in the term cohort were contextualized in two ways: by comparing them against cortical asymmetries observed in 103 preterm neonates scanned at term-equivalent age, and by comparing structural asymmetries against those observed in 1,110 healthy young adults from the Human Connectome Project. While associations with preterm birth and biological sex were minimal, significant differences exist between the asymmetries observed at birth and those observed in adulthood.